    An innovative quality improvement curriculum for third-year medical students

    Background: Competence in quality improvement (QI) is a priority for medical students. We describe a self-directed QI skills curriculum for medical students in a 1-year longitudinal integrated third-year clerkship: an ideal context in which to learn and practice QI. Methods: Two groups of four students identified a quality gap, described existing efforts to address the gap, developed quantitative measures, and proposed a QI intervention. The program was assessed with knowledge and attitude surveys and a validated tool for rating trainee QI proposals. Reaction to the curriculum was assessed by survey and focus group. Results: Knowledge of QI concepts did not improve (mean knowledge score±SD: pre 5.9±1.5 vs. post 6.6±1.3, p=0.20). There were significant improvements in attitudes (mean topic attitude score±SD) toward the value of QI (pre: 9.9±1.8 vs. post: 12.6±1.9, p=0.03) and confidence in QI skills (pre: 13.4±2.8 vs. post: 16.1±3.0, p=0.05). Proposals lacked sufficient analysis of interventions and evaluation plans. Reaction was mixed, including appreciation for the experience and frustration with finding appropriate mentorship. Conclusion: Clinical-year students were able to conduct a self-directed QI project. The lack of improvement in QI knowledge suggests that self-directed learning in this domain may be insufficient without targeted didactics. Higher-order skills such as developing measurement plans would benefit from explicit instruction and mentorship. Lessons from this experience will allow educators to better target QI curricula to medical students in the clinical years.
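    The abstract does not name the test behind these pre/post p-values; a paired comparison of matched scores is the standard approach, so the sketch below assumes a paired t-test and uses made-up scores purely for illustration.

```python
# Minimal sketch of a paired pre/post comparison; the paired t-test is an
# assumption (the abstract does not state the test used), and these scores
# are illustrative placeholders, not the study data.
from scipy import stats

pre = [5, 7, 6, 4, 8, 6, 5, 7]   # hypothetical pre-curriculum knowledge scores
post = [6, 8, 6, 5, 9, 7, 6, 8]  # hypothetical post-curriculum knowledge scores

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test on matched scores
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```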

    Concurrent validity of self-rating scale of self-directed learning and self-directed learning instrument among Italian nursing students

    BACKGROUND: Self-Directed Learning develops when students take the initiative for their learning, recognising needs, formulating goals, identifying resources, implementing appropriate strategies and evaluating learning outcomes. This should be seen as a collaborative process between the nurse educator and the learner. At the international level, various instruments have been used to measure Self-Directed Learning (SDL) abilities, both in original and in culturally-adapted versions. However, few instruments have been subjected to full validation, and no gold standard reference has been established to date. In addition, few researchers have adopted the established tools to assess the concurrent validity of emerging new tools. Therefore, the aim of this study was to measure the concurrent validity between the Italian version of the Self-Rating Scale of Self-Directed Learning (SRSSDL_Ita) and the Self-Directed Learning Instrument (SDLI) in undergraduate nursing students. METHODS: A concurrent validity study was conducted in a Bachelor-level nursing degree programme located in Italy. All nursing students attending the first, second or third year (n=428) were the target sample. The SRSSDL_Ita and the SDLI were used. The Pearson correlation was used to determine the concurrent validity between the instruments; bias-corrected and accelerated (BCa) bootstrap 95% confidence intervals were also calculated. RESULTS: The majority of participants were students attending their first year (47.9%), and they were predominantly female (78.5%). Their average age was 22.5±4.1 years. The SDL abilities scores, as measured with the SRSSDL_Ita (min 40, max 200), were, on average, 160.79 (95% CI 159.10-162.57; median 160); with the SDLI (min 20, max 100), they were, on average, 82.57 (95% CI 81.79-83.38; median 83). The Pearson correlation between the SRSSDL_Ita and SDLI instruments was 0.815 (BCa 95% CI 0.774-0.848; p<0.001). CONCLUSIONS: The findings confirm the concurrent validity of the SRSSDL_Ita with the SDLI. The SRSSDL_Ita instrument can be useful in the process of identifying Self-Directed Learning abilities, which are essential for students to achieve the expected learning goals and become lifelong learners.
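    For readers who want to reproduce this style of analysis, here is a minimal sketch of a Pearson correlation with a BCa bootstrap confidence interval in Python (SciPy); the two score vectors are simulated stand-ins, not the study data.

```python
# Sketch: Pearson correlation between two instruments with a BCa bootstrap
# 95% CI, mirroring the reported analysis. The scores are simulated
# stand-ins for the 428 students' real questionnaire totals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
srssdl = rng.normal(160, 10, size=428)                 # hypothetical SRSSDL_Ita totals
sdli = 82 + 0.35 * (srssdl - 160) + rng.normal(0, 3, size=428)  # correlated SDLI totals

r, p = stats.pearsonr(srssdl, sdli)

def pearson_r(x, y):
    """Statistic passed to the bootstrap: correlation of one paired resample."""
    return stats.pearsonr(x, y)[0]

res = stats.bootstrap((srssdl, sdli), pearson_r, paired=True, vectorized=False,
                      method="BCa", confidence_level=0.95, random_state=rng)
ci = res.confidence_interval
print(f"r = {r:.3f} (95% BCa CI {ci.low:.3f}-{ci.high:.3f})")
```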

    Selecting and implementing overview methods: implications from five exemplar overviews

    Background: Overviews of systematic reviews are an increasingly popular method of evidence synthesis, but there is a lack of clear guidance for completing overviews and a number of methodological challenges. At the UK Cochrane Symposium 2016, methodological challenges of five overviews were explored. Using data from these five overviews, practical implications are proposed to support the methodological decision making of authors writing protocols for future overviews. Methods: Methods, and their justification, from the five exemplar overviews were tabulated and compared with areas of debate identified within the current literature. Key methodological challenges and implications for the development of overview protocols were generated and synthesised into a list, then discussed and refined until there was consensus. Results: Methodological features of three Cochrane overviews, one overview of diagnostic test accuracy and one mixed-methods overview have been summarised. Methods of selection of reviews and data extraction were similar. Either the AMSTAR or the ROBIS tool was used to assess the quality of included reviews. The GRADE approach was most commonly used to assess the quality of evidence within the reviews. Eight key methodological challenges were identified from the exemplar overviews. There was good agreement between our findings and emerging areas of debate within a recently published synthesis. Implications for the development of protocols for future overviews were identified. Conclusions: Overviews are a relatively new methodological innovation, and there are currently substantial variations in the methodological approaches used within different overviews. There are considerable methodological challenges for which optimal solutions are not necessarily yet known. Lessons learnt from five exemplar overviews highlight a number of methodological decisions which may be beneficial to consider during the development of an overview protocol.
    Funding: The overview conducted by Pollock [19] was supported by a project grant from the Chief Scientist Office of the Scottish Government. The overview conducted by McClurg [21] was supported by a project grant from the Physiotherapy Research Foundation. The overview by Hunt [22] was supported as part of doctoral programme funding by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula (PenCLAHRC). The overview conducted by Estcourt [20] was supported by an NIHR Cochrane Programme Grant for the Safe and Appropriate Use of Blood Components. The overview conducted by Brunton [23] was commissioned by the Department of Health as part of an ongoing programme of work on health policy research synthesis. Alex Pollock is employed by the Nursing, Midwifery and Allied Health Professions (NMAHP) Research Unit, which is supported by the Chief Scientist Office of the Scottish Government. Pauline Campbell is supported by the Chief Nurses Office of the Scottish Government.

    The first biosimilar approved for the treatment of osteoporosis

    To demonstrate the clinical comparability between RGB-10 (a biosimilar teriparatide) and the originator, a comparative pharmacokinetic trial was conducted. The study was successful in establishing bioequivalence. Marketing authorisation for RGB-10 (Terrosa®) was granted by the European Medicines Agency in 2017.
    Teriparatide, the first bone anabolic agent, is the biologically active fragment of human parathyroid hormone. The imminent patent expiry of the originator will open the door for biosimilars to enter the osteology market, thereby improving access to a highly effective, yet prohibitively expensive therapy.
    Subsequent to establishing comparability on the quality and non-clinical levels between RGB-10, a biosimilar teriparatide, and its reference product (Forsteo®), a randomised, double-blind, 2-way cross-over comparative study (duration: four days) was conducted in 54 healthy women (ages: 18 to 55 years) to demonstrate the pharmacokinetic/pharmacodynamic (PK/PD) equivalence and comparable safety of these products. Extent of exposure (AUC0-tlast) and peak exposure (Cmax), as measured by means of ELISA, were evaluated as co-primary PK endpoints, and serum calcium levels, as measured using standard automated techniques, were assessed for PD effects. Safety was monitored throughout the study.
    The 94.12% CIs for the ratios of the test to the reference treatment, used because of the two-stage design (85.20-98.60% for AUC0-tlast and 85.51-99.52% for Cmax), fell within the 80.00-125.00% acceptance range. The calcium PD parameters were essentially identical, with geometric mean ratios (GMRs) of 99.93% and 99.87% for AUC and Cmax, respectively. Analysis of the safety data did not reveal any differences between RGB-10 and its reference.
    Based on the high level of similarity in the preclinical data and the results of this clinical study, marketing authorisation for RGB-10 (Terrosa®) was granted by the European Medicines Agency (EMA) in 2017.
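    The bioequivalence criterion here is the usual one: a confidence interval for the geometric mean ratio of log-transformed exposure, checked against the 80.00-125.00% range. Below is a minimal sketch using simulated values and a simple paired analysis that omits the period and sequence terms a full crossover ANOVA would include.

```python
# Sketch of the standard bioequivalence check: a CI for the geometric mean
# ratio (GMR) of log-transformed exposure, compared against 80.00-125.00%.
# Values are simulated placeholders, and this simple paired analysis omits
# the period and sequence effects a full crossover ANOVA would model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
auc_ref = rng.lognormal(mean=5.0, sigma=0.25, size=54)               # reference AUCs
auc_test = auc_ref * rng.lognormal(mean=-0.03, sigma=0.10, size=54)  # test AUCs

log_diff = np.log(auc_test) - np.log(auc_ref)   # within-subject log differences
# 94.12% CI, matching the adjusted level used for the two-stage design
lo, hi = stats.t.interval(0.9412, df=len(log_diff) - 1,
                          loc=log_diff.mean(), scale=stats.sem(log_diff))
print(f"GMR: {np.exp(log_diff.mean()) * 100:.2f}%")
print(f"94.12% CI: {np.exp(lo) * 100:.2f}%-{np.exp(hi) * 100:.2f}% "
      f"(bioequivalent if within 80.00%-125.00%)")
```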

    A web-based clinical decision tool to support treatment decision-making in psychiatry: a pilot focus group study with clinicians, patients and carers

    Background. Treatment decision tools have been developed in many fields of medicine, including psychiatry; however, benefits for patients have not been sustained once the support is withdrawn. We have developed a web-based computerised clinical decision support tool (CDST), which can provide patients and clinicians with continuous, up-to-date, personalised information about the efficacy and tolerability of competing interventions. To test the feasibility and acceptability of the CDST, we conducted a focus group study aiming to explore the views of clinicians, patients and carers. Methods. The CDST was developed in Oxford. To tailor treatments at an individual level, the CDST combines the best available evidence from the scientific literature with patient preferences and values and with the patient's medical profile to generate personalised clinical recommendations. We conducted three focus groups comprising three different participant types: consultant psychiatrists, participants with a mental health diagnosis and/or experience of caring for someone with a mental health diagnosis, and primary care practitioners and nurses. Each 1-hour focus group started with a short visual demonstration of the CDST. To standardise the discussion during the focus groups, we used the same topic guide, which covered themes relating to the acceptability and usability of the CDST. Focus groups were recorded and any identifying participant details were anonymised. Data were analysed thematically and managed using the Framework method and the constant comparative method. Results. The focus groups took place in Oxford between October 2016 and January 2017. Overall, 31 participants attended (12 consultants, 11 primary care practitioners and 8 patients or carers). The main themes that emerged related to CDST applications in clinical practice, communication, conflicting priorities and record keeping. The CDST was considered a useful clinical decision support tool, with recognised value in promoting clinician-patient collaboration and contributing to the development of personalised medicine. One major benefit of the CDST was perceived to be the open discussion about the possible side-effects of medications. Participants from all three groups, however, universally commented that the terminology and language presented on the CDST were too medicalised, potentially leading to ethical issues around consent to treatment. Conclusions. The CDST can improve communication pathways between patients, carers and clinicians, identifying care priorities and providing an up-to-date platform for implementing evidence-based practice with regard to prescribing practices.

    Budd-Chiari Syndrome: Long term success via hepatic decompression using transjugular intrahepatic porto-systemic shunt

    Background: Budd-Chiari syndrome (BCS) generally implies thrombosis of the hepatic veins and/or the intrahepatic or suprahepatic inferior vena cava. Treatment depends on the underlying cause, the anatomic location, the extent of the thrombotic process and the functional capacity of the liver. It can be divided into medical treatment, including anticoagulation and thrombolysis; radiological procedures, such as angioplasty and transjugular intrahepatic porto-systemic shunt (TIPS); and surgical interventions, including orthotopic liver transplantation (OLT). Controlled trials or reports on larger cohorts are limited due to the rarity of the disease. The aim of this study was to report our single-centre long-term results for patients with BCS receiving one of three treatment options, i.e. medication only, TIPS or OLT, decided individually by our local expert group. Methods: 20 patients with acute, subacute or chronic BCS were treated between 1988 and 2008. Clinical records were analysed with respect to underlying disease, therapeutic interventions, complications and overall outcome. Results: 16 women and 4 men with a mean age of 34 ± 12 years (range: 14-60 years) at the time of diagnosis were included. Myeloproliferative disorders or a plasmatic coagulopathy were identified as the underlying disease in 13 patients; in the other patients the cause of BCS remained unclear. 12 patients presented with acute BCS, 8 with subacute or chronic disease. 13 patients underwent TIPS and 4 patients OLT as initial therapy, 2 patients required only symptomatic therapy, and one patient died from liver failure before any specific treatment could be initiated. Eleven of the 13 TIPS patients required revisions (2.5 ± 2.4 per patient; range: 0-8). One patient died from his underlying hematologic disease. The remaining 12 patients still have stable liver function not requiring OLT. All 4 patients who underwent OLT as initial treatment required re-OLT due to thromboembolic complications of the graft. Survival in the TIPS group was 92.3% and in the OLT group 75% during a median follow-up of 4 and 11.5 years, respectively. Conclusion: Our results confirm the role of TIPS in the management of patients with acute, subacute and chronic BCS. The limited number of patients with OLT does not allow a meaningful conclusion to be drawn. However, the underlying disease may generate major complications, which is why OLT should be limited to patients who cannot be managed by TIPS.

    Prognostic value of simple frailty and malnutrition screening tools in patients with acute heart failure due to left ventricular systolic dysfunction

    Background: Frailty and malnutrition are common in patients with heart failure (HF) and are associated with adverse outcomes. We studied the prognostic value of three malnutrition and three frailty indices in patients admitted acutely to hospital with HF. Methods: 265 consecutive patients [62% males, median age 80 (interquartile range (IQR): 72–86) years, median NTproBNP 3633 (IQR: 2025–6407) ng/l] admitted with HF between 2013 and 2014 were enrolled. Patients were screened for frailty using the Derby frailty index (DFI), acute frailty network (AFN) frailty criteria, and clinical frailty scale (CFS), and for malnutrition using the geriatric nutritional risk index (GNRI), controlling nutritional status (CONUT) score and prognostic nutritional index (PNI). Results: According to the CFS (> 4), DFI, and AFN, 53%, 50%, and 53% were frail, respectively. According to the GNRI (≤ 98), CONUT score (> 4), and PNI (≤ 38), 46%, 46%, and 42% of patients were malnourished, respectively. During a median follow-up of 598 days (IQR: 319–807 days), 113 patients died. One-year mortality was 1% for those who were neither frail nor malnourished, 15% for those who were either malnourished or frail, and 65% for those who were both malnourished and frail. Amongst the malnutrition scores, the PNI, and amongst the frailty scores, the CFS, increased model performance the most compared with the base model. A final model including the CFS and PNI increased the c-statistic for mortality prediction from 0.68 to 0.84. Conclusion: Worsening frailty and malnutrition indices are strongly related to worse outcomes in patients hospitalised with HF.
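    As an illustration of the c-statistic comparison reported above (and only an illustration: the covariates, coefficients, model form and data below are all invented; the authors' base model is not specified here, and their outcome was time-to-event mortality rather than the simple binary outcome used in this sketch):

```python
# Illustration only: comparing c-statistics between a base model and one
# adding CFS and PNI. Everything below (covariates, coefficients, logistic
# form, simulated data) is invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 265
age = rng.normal(80, 8, n)                       # hypothetical ages
cfs = rng.integers(1, 10, n).astype(float)       # hypothetical frailty scores
pni = rng.normal(40, 6, n)                       # hypothetical nutrition scores
logit = -9.5 + 0.12 * age + 0.5 * cfs - 0.08 * pni
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_base = age.reshape(-1, 1)
X_full = np.column_stack([age, cfs, pni])
base = LogisticRegression().fit(X_base, died)
full = LogisticRegression().fit(X_full, died)

# For a binary outcome, the c-statistic is the ROC AUC of predicted risk.
print("base c:", round(roc_auc_score(died, base.predict_proba(X_base)[:, 1]), 2))
print("full c:", round(roc_auc_score(died, full.predict_proba(X_full)[:, 1]), 2))
```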

    Should the Arteriovenous Fistula Be Created before Starting Dialysis?: A Decision Analytic Approach

    Background: An arteriovenous fistula (AVF) is considered the vascular access of choice, but uncertainty exists about the optimal time for its creation in pre-dialysis patients. The aim of this study was to determine the optimal vascular access referral strategy for stage 4 (glomerular filtration rate <30 ml/min/1.73 m²) chronic kidney disease patients using a decision analytic framework. Methods: A Markov model was created to compare two strategies: refer all stage 4 chronic kidney disease patients for an AVF versus wait until the patient starts dialysis. Data from published observational studies were used to estimate the probabilities used in the model. A Markov cohort analysis was used to determine the optimal strategy, with life expectancy and quality-adjusted life expectancy as the outcomes. Sensitivity analyses, including a probabilistic sensitivity analysis, were performed using Monte Carlo simulation. Results: The wait strategy results in a higher life expectancy (66.6 versus 65.9 months) and quality-adjusted life expectancy (38.9 versus 38.5 quality-adjusted life months) than immediate AVF creation. This result was robust across all the parameters except at higher rates of progression and lower rates of ischemic steal syndrome. Conclusions: Early creation of an AVF, as recommended by most guidelines, may not be the preferred strategy in all pre-dialysis patients. Further research on cost implications and patient preferences for treatment options needs to be done before recommending early AVF creation.
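    A toy sketch of the kind of Markov cohort trace described in the Methods follows; the states and every transition probability are invented placeholders, not the published model's inputs.

```python
# Toy Markov cohort trace for the two referral strategies described above.
# The states, monthly cycle, and all transition probabilities are invented
# placeholders; the published model estimated its inputs from observational
# studies.
import numpy as np

# States: pre-dialysis, on dialysis with AVF, on dialysis with catheter, dead
def life_expectancy_months(transition, cycles=240):
    """Sum person-months alive over a 20-year monthly cohort trace."""
    cohort = np.array([1.0, 0.0, 0.0, 0.0])   # whole cohort starts pre-dialysis
    months = 0.0
    for _ in range(cycles):
        months += cohort[:3].sum()            # the three alive states
        cohort = cohort @ transition          # advance one monthly cycle
    return months

refer_now = np.array([                        # dialysis is started with an AVF
    [0.96, 0.03, 0.00, 0.01],
    [0.00, 0.985, 0.00, 0.015],
    [0.00, 0.05, 0.93, 0.02],
    [0.00, 0.00, 0.00, 1.00],
])
wait = np.array([                             # dialysis is started via catheter
    [0.96, 0.00, 0.03, 0.01],
    [0.00, 0.985, 0.00, 0.015],
    [0.00, 0.05, 0.93, 0.02],
    [0.00, 0.00, 0.00, 1.00],
])

print("refer now:", round(life_expectancy_months(refer_now), 1), "months")
print("wait:     ", round(life_expectancy_months(wait), 1), "months")
```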